Deploy metadata in end to end tests #2186
base: development/2.10
Conversation
Hello kerkesni, my role is to assist you with the merge of this pull request.
Available options
Available commands
Status report is not available.
Incorrect fix version. Considering where you are trying to merge, I ignored possible hotfix versions and expected to find the fix versions corresponding to the target branch. Please check the fix versions set on the issue.
Force-pushed from e03ffd7 to 15d54c7
Waiting for approval. The following approvals are needed before I can proceed with the merge:
Force-pushed from 2468755 to ac95b9c
Force-pushed from 0efddbf to 9055c5f
Force-pushed from 8f823ba to 8e060ea
Metadata components need to be started in the order repd -> bucketd -> cloudserver, as they are not able to recover on their own when a component they depend on is unavailable at startup. Issue: ZENKO-4414
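A minimal sketch of a composite-action step enforcing that order, assuming helm and kubectl are available on the runner; the release names, chart paths, and namespace below are assumptions, not the actual content of deploy-metadata.sh:

    - name: Deploy metadata
      shell: bash
      working-directory: ./.github/scripts/end2end
      run: |
        # Install each tier with --wait so it is fully ready before the next
        # one starts, since the components cannot recover on their own when a
        # dependency is unavailable at startup (repd -> bucketd -> cloudserver).
        for component in repd bucketd cloudserver; do
          helm upgrade --install "metadata-${component}" "./charts/metadata-${component}" \
            --namespace metadata --create-namespace --wait --timeout 10m
        done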
Configuration of the test environment is done through an image built in the build-test-image job; this image contains the list of location details used to initialize the locations. That list isn't updated when only the end-to-end test jobs are retried, while the bucket is created using the env var passed to the pod. So we end up creating the bucket with the updated retry count but configuring the location with the old bucket name, which fails because Pensieve won't be able to find that bucket. Issue: ZENKO-4414
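A hedged illustration of that mismatch; the variable name, bucket naming scheme, and step below are hypothetical, but github.run_attempt is what changes when only the test jobs are retried:

    # Baked into the image by build-test-image at build time (attempt 1), e.g.:
    #   locations: [{ name: ring-s3c, bucketName: ci-ring-s3c-bucket-1 }]
    # Recomputed at run time by the retried end-to-end job:
    - name: Run end-to-end tests
      shell: bash
      env:
        # hypothetical naming scheme: the suffix follows the retry count,
        # so on a retry it no longer matches the value baked into the image
        RING_S3C_BUCKET_NAME: ci-ring-s3c-bucket-${{ github.run_attempt }}
      run: ./run-e2e-tests.sh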
Force-pushed from 8e060ea to 5a84228
@@ -83,6 +83,11 @@ runs:
      shell: bash
      run: sh tests/smoke/deploy-sorbet-resources.sh end2end
      working-directory: ./.github/scripts/end2end/operator
    - name: Deploy metadata
      shell: bash
      run: bash deploy-metadata.sh
We are already running with shell: bash, so make the script executable and just execute it:

    - run: bash deploy-metadata.sh
    + run: ./deploy-metadata.sh
      shell: bash
      run: bash deploy-metadata.sh
      working-directory: ./.github/scripts/end2end
      if: ${{ env.ENABLE_RING_TESTS == 'true' }}
Better to have it as an input rather than an env variable (the input parameter may, however, take its "default" value from the env at the call site).
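A hedged sketch of that suggestion; the input name, action path, and default are assumptions, not the PR's actual code. The caller can still take the "default" from the existing env variable, as the reviewer notes:

    # action.yaml of the composite action (sketch)
    inputs:
      enable-ring-tests:
        description: "Run the RING/S3C-related end-to-end steps"
        required: false
        default: "false"
    runs:
      using: composite
      steps:
        - name: Deploy metadata
          if: ${{ inputs.enable-ring-tests == 'true' }}
          shell: bash
          run: ./deploy-metadata.sh
          working-directory: ./.github/scripts/end2end

    # caller (sketch): feed the input from the env variable already defined
    - uses: ./.github/actions/end2end
      with:
        enable-ring-tests: ${{ env.ENABLE_RING_TESTS }}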
    # wait for all bucketd pods to start serving port 9000
    wait_for_all_pods_behind_services metadata-bucketd metadata 9000 60

    # manually add "s3c.local" to the rest endpoints list as it's not configurable in the chart
Why do we need s3c.local? I did not need such a patch on my Artesca cluster (in namespace default, I think): I just installed the chart and created the Artesca location pointing at s3c-cloudserver.default.svc.cluster.local.
    # - scality-cloud
    env:
      ENABLE_RING_TESTS: "true"
      GIT_ACCESS_TOKEN: ${{ secrets.GIT_ACCESS_TOKEN }}
Credentials should not be added to env, especially at such a broad scope: best to pass them via inputs, in the place where they are needed.
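A hedged sketch of that pattern, with hypothetical input, script, and action names: the secret is declared as an input and exposed only to the single step that needs it, instead of the whole action's env:

    # action.yaml of the composite action (sketch)
    inputs:
      git-access-token:
        description: "Token for the step that fetches private repositories"
        required: true
    runs:
      using: composite
      steps:
        - name: Fetch private test dependencies
          shell: bash
          run: ./fetch-private-deps.sh   # hypothetical script
          env:
            GIT_ACCESS_TOKEN: ${{ inputs.git-access-token }}

    # caller (sketch)
    - uses: ./.github/actions/end2end
      with:
        git-access-token: ${{ secrets.GIT_ACCESS_TOKEN }}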
end2end-sharded
Issue: ZENKO-4414